
    Colour analysis of degraded parchment

    Multispectral imaging was employed to collect data on the degradation of an 18th-century parchment by a series of physical and chemical treatments. Each sample was photographed before and after treatment by a monochrome digital camera with 21 narrow-band filters. A template-matching technique was used to detect the circular holes in each sample, and a four-point projective transform was used to register the 21 images. Colour accuracy was verified by comparing the reconstructed spectra with spectrophotometer measurements.
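
    As an illustration of this registration step, here is a minimal sketch using OpenCV: template matching locates the hole centres, and a four-point projective transform warps each band onto a reference. The template image, hole count, and correspondence ordering are assumptions of the example, not details from the paper.

```python
# Minimal sketch of the registration step described above (assumed OpenCV
# pipeline; template, ordering, and sizes are placeholders).
import cv2
import numpy as np

def find_hole_centres(gray, template, n_holes=4):
    """Locate the n strongest template matches (the circular holes)."""
    res = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF_NORMED)
    th, tw = template.shape
    centres = []
    for _ in range(n_holes):
        _, _, _, max_loc = cv2.minMaxLoc(res)
        centres.append((max_loc[0] + tw / 2.0, max_loc[1] + th / 2.0))
        # Suppress this peak so the next call finds the next hole.
        cv2.rectangle(res, max_loc, (max_loc[0] + tw, max_loc[1] + th), -1.0, -1)
    # NOTE: in practice the four centres must be sorted into a consistent
    # order so they correspond across images; that step is omitted here.
    return np.float32(centres)

def register_band(band, band_centres, ref_centres):
    """Four-point projective transform onto the reference band's frame."""
    H = cv2.getPerspectiveTransform(band_centres, ref_centres)
    return cv2.warpPerspective(band, H, band.shape[1::-1])
```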

    Plausible Shading Decomposition For Layered Photo Retouching

    Photographers routinely compose multiple manipulated photos of the same scene (layers) into a single image that is better than any individual photo could be alone. Similarly, 3D artists set up rendering systems to produce layered images, each containing only an individual aspect of the light transport, which are composed into the final result in post-production. Regrettably, both approaches either take considerable time to capture or remain limited to synthetic scenes. In this paper, we suggest a system for decomposing a single image into a plausible shading decomposition (PSD) that approximates effects such as shadow, diffuse illumination, albedo, and specular shading. This decomposition can then be manipulated in any off-the-shelf image manipulation software and recomposited back. We perform the decomposition with a convolutional neural network trained on synthetic data. We demonstrate the effectiveness of our decomposition on synthetic (i.e., rendered) and real data (i.e., photographs), and use it for common photo manipulations that are nearly impossible to perform otherwise from a single image.
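
    For intuition, a recompositing step might look like the numpy sketch below. The multiplicative/additive model used here is a common convention and an assumption of the example; the paper's exact PSD compositing may differ.

```python
import numpy as np

def recomposite(albedo, diffuse, shadow, specular):
    """Recombine (possibly edited) shading layers into a final image.
    All inputs are HxWx3 float arrays; the model itself is an assumption."""
    return np.clip(albedo * diffuse * shadow + specular, 0.0, 1.0)
```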

    Progressive Refinement Imaging

    This paper presents a novel technique for progressive online integration of uncalibrated image sequences with substantial geometric and/or photometric discrepancies into a single, geometrically and photometrically consistent image. Our approach can handle large sets of images, acquired from a nearly planar or infinitely distant scene at different resolutions in the object domain and under variable local or global illumination conditions. It allows for efficient user guidance, as its progressive nature provides a valid and consistent reconstruction at any moment during the online refinement process.

    Our approach avoids the global optimization techniques commonly used in the field of image refinement and progressively incorporates new imagery into a dynamically extendable and memory-efficient Laplacian pyramid. Our image registration process includes a coarse homography stage and a local refinement stage using optical flow. Photometric consistency is achieved by retaining the photometric intensities given in a reference image while it is being refined. Globally blurred imagery and local geometric inconsistencies due to, e.g., motion are detected and removed prior to image fusion.

    We demonstrate the quality and robustness of our approach on several image and video sequences, including handheld acquisition with mobile phones and zooming sequences with consumer cameras.
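
    The fixed-depth sketch below illustrates the conventional Laplacian pyramid that the method incrementally extends; it uses OpenCV's pyrDown/pyrUp and is only a baseline, not the paper's dynamically extendable, memory-efficient variant.

```python
import cv2

def build_laplacian_pyramid(img, levels=4):
    """img: float32 image. Returns band-pass levels plus a low-pass residual."""
    pyramid, current = [], img
    for _ in range(levels):
        down = cv2.pyrDown(current)
        up = cv2.pyrUp(down, dstsize=current.shape[1::-1])
        pyramid.append(cv2.subtract(current, up))  # band-pass detail
        current = down
    pyramid.append(current)  # low-frequency residual
    return pyramid

def collapse(pyramid):
    """Invert the pyramid back into a single image."""
    img = pyramid[-1]
    for lap in reversed(pyramid[:-1]):
        img = cv2.add(cv2.pyrUp(img, dstsize=lap.shape[1::-1]), lap)
    return img
```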

    Neural BRDF Representation and Importance Sampling

    Controlled capture of real-world material appearance yields tabulated sets of highly realistic reflectance data. In practice, however, its high memory footprint requires compressing it into a representation that can be used efficiently in rendering while remaining faithful to the original. Previous works in appearance encoding often prioritized one of these requirements at the expense of the other, either by applying high-fidelity array compression strategies not suited to efficient queries during rendering, or by fitting a compact analytic model that lacks expressiveness. We present a compact neural-network-based representation of BRDF data that combines high-accuracy reconstruction with efficient practical rendering via built-in interpolation of reflectance. We encode BRDFs as lightweight networks and propose a training scheme with adaptive angular sampling, critical for the accurate reconstruction of specular highlights. Additionally, we propose a novel approach to make our representation amenable to importance sampling: rather than inverting the trained networks, we learn to encode them in a more compact embedding that can be mapped to the parameters of an analytic BRDF for which importance sampling is known. We evaluate encoding results on isotropic and anisotropic BRDFs from multiple real-world datasets, and importance sampling performance for isotropic BRDFs mapped to two different analytic models.
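
    A lightweight BRDF network of the kind described could be sketched in PyTorch as below; the layer sizes and the raw direction-pair input are illustrative assumptions, not the paper's architecture or parameterization.

```python
import torch
import torch.nn as nn

class NeuralBRDF(nn.Module):
    """Tiny MLP mapping an incoming/outgoing direction pair to RGB reflectance.
    Sizes and input encoding are illustrative, not the paper's exact design."""
    def __init__(self, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, wi, wo):
        # wi, wo: (N, 3) unit direction vectors.
        return torch.relu(self.net(torch.cat([wi, wo], dim=-1)))
```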

    Learning on the Edge: Investigating Boundary Filters in CNNs

    Convolutional neural networks (CNNs) handle the case where filters extend beyond the image boundary using heuristics such as zero, repeat, or mean padding. These schemes are applied in an ad-hoc fashion and, being weakly related to the image content and oblivious to the target task, result in low output quality at the boundary. In this paper, we propose a simple and effective improvement that learns the boundary handling itself. At training time, the network is provided with a separate set of explicit boundary filters. At test time, we use these filters, which have learned to extrapolate features at the boundary in a way that is optimal for the specific task. Our extensive evaluation over a wide range of architectural changes (variations of layers, feature channels, or both) shows how the explicit filters result in improved boundary handling. Furthermore, we investigate the efficacy of variations of such boundary filters with respect to convergence speed and accuracy. Finally, we demonstrate an improvement of 5–20% across the board of typical CNN applications (colorization, de-Bayering, optical flow, disparity estimation, and super-resolution). Supplementary material and code can be downloaded from the project page: http://geometry.cs.ucl.ac.uk/projects/2019/investigating-edge/
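
    The idea can be illustrated in one dimension: a shared filter handles the interior via a valid convolution, while each border position gets its own learned, truncated filter instead of padded input. The PyTorch sketch below is a simplified single-channel illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class BoundaryConv1d(nn.Module):
    """Valid convolution inside; a separate learned filter at each border
    position (1-D, single-channel illustration of explicit boundary filters)."""
    def __init__(self, k=3):
        super().__init__()
        assert k % 2 == 1
        r = k // 2
        self.interior = nn.Conv1d(1, 1, k)  # shared interior filter
        # Truncated filter per border position, sized to the visible window.
        self.left = nn.ModuleList(nn.Conv1d(1, 1, k - r + i) for i in range(r))
        self.right = nn.ModuleList(nn.Conv1d(1, 1, k - r + i) for i in range(r))

    def forward(self, x):  # x: (N, 1, L)
        mid = self.interior(x)  # (N, 1, L - 2r)
        left = [f(x[:, :, :f.kernel_size[0]]) for f in self.left]
        right = [f(x[:, :, -f.kernel_size[0]:]) for f in self.right]
        return torch.cat(left + [mid] + list(reversed(right)), dim=-1)
```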

    High-Dynamic-Range Lighting Estimation From Face Portraits

    We present a CNN-based method for outdoor high-dynamic-range (HDR) environment map prediction from low-dynamic-range (LDR) portrait images. Our method relies on two different CNN architectures, one for light encoding and another for face-to-light prediction. Outdoor lighting is characterised by an extremely high dynamic range, so our encoding splits the environment map data into low- and high-intensity components and encodes them using tailored representations. The combination of both network architectures constitutes an end-to-end method for accurate HDR light prediction from faces at real-time rates, inaccessible to previous methods, which focused on low-dynamic-range lighting or relied on non-linear optimisation schemes. We train our networks using both real and synthetic images, compare our light encoding with other methods for light representation, and analyse our results for light prediction on real images. We show that our predicted HDR environment maps can be used as accurate illumination sources for scene renderings, with potential applications in 3D object insertion for augmented reality.
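
    The low/high intensity split can be illustrated with a simple threshold, as in the numpy sketch below; the threshold value and linear split are assumptions of the example, not the paper's tailored representations.

```python
import numpy as np

def split_env_map(env, threshold=1.0):
    """Split an HDR environment map (HxWx3 linear radiance) into a
    bounded low-intensity component and a bright residual (e.g. the sun)."""
    low = np.clip(env, 0.0, threshold)
    high = np.maximum(env - threshold, 0.0)
    return low, high   # recombine with low + high
```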

    Comprehensive Use of Curvature for Robust and Accurate Online Surface Reconstruction

    Interactive real-time scene acquisition from hand-held depth cameras has recently gained considerable momentum, enabling applications in ad-hoc object acquisition, augmented reality, and other fields. A key challenge for online reconstruction remains error accumulation in the reconstructed camera trajectory, due to drift-inducing instabilities in the range scan alignments of the underlying iterative closest point (ICP) algorithm. Various strategies have been proposed to mitigate that drift, including SIFT-based pre-alignment, color-based weighting of ICP pairs, and stronger weighting of edge features. In our work, we focus on surface curvature as a feature that is detectable on range scans alone and hence does not depend on accurate multi-sensor alignment. In contrast to previous work that took curvature into consideration, however, we treat curvature as an independent quantity that we consistently incorporate into every stage of the real-time reconstruction pipeline: densely curvature-weighted ICP, range image fusion, local surface reconstruction, and rendering. Using multiple benchmark sequences, and in direct comparison to other state-of-the-art online acquisition systems, we show that our approach significantly reduces drift, both when analyzing individual pipeline stages in isolation and across the online reconstruction pipeline as a whole.
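
    As a small illustration of curvature weighting in ICP, the sketch below down-weights correspondences with dissimilar curvature before forming point-to-plane residuals; the Gaussian kernel and its bandwidth are assumptions, not the paper's weighting scheme.

```python
import numpy as np

def curvature_weights(kappa_src, kappa_dst, sigma=0.05):
    """Weight each correspondence by curvature similarity (illustrative)."""
    return np.exp(-((kappa_src - kappa_dst) ** 2) / (2.0 * sigma ** 2))

def weighted_point_to_plane_residuals(p_src, p_dst, n_dst, w):
    """Residuals for a weighted point-to-plane ICP step; (N, 3) inputs."""
    return w * np.einsum('ij,ij->i', p_src - p_dst, n_dst)
```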

    An integer representation for periodic tilings of the plane by regular polygons

    We describe a representation for periodic tilings of the plane by regular polygons. Our approach is to explicitly represent a small subset of seed vertices from which we systematically generate all elements of the tiling by translations. We represent a tiling concretely by a (2+n)×4 integer matrix containing lattice coordinates for two translation vectors and n seed vertices. We discuss several properties of this representation and describe how to exploit it elegantly and efficiently for reconstruction, rendering, and automatic crystallographic classification by symmetry detection.
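
    A small decoding sketch appears below. It assumes the four integer columns are coordinates over the basis {1, ζ, ζ², ζ³} with ζ = e^{iπ/6}, a lattice in which vertices of regular-polygon tilings can be expressed; this basis choice is our assumption for illustration, not taken from the paper.

```python
import numpy as np

ZETA = np.exp(1j * np.pi / 6)   # assumed basis root (see lead-in)
BASIS = ZETA ** np.arange(4)    # {1, zeta, zeta^2, zeta^3}

def decode(M):
    """M: (2+n)x4 integer matrix -> two translation vectors and n seeds."""
    pts = np.asarray(M, dtype=float) @ BASIS   # lattice coords -> complex plane
    return pts[0], pts[1], pts[2:]

def generate(M, reps=3):
    """Translate the seed vertices over a reps-by-reps block of the lattice."""
    t1, t2, seeds = decode(M)
    i, j = np.mgrid[0:reps, 0:reps]
    offsets = (i * t1 + j * t2).ravel()
    return (offsets[:, None] + seeds[None, :]).ravel()
```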

    Interactive Exploration and Flattening of Deformed Historical Documents

    We present an interactive application for browsing severely damaged documents and other cultural artefacts. Such documents often contain strong geometric distortions, such as wrinkling, buckling, and shrinking, and cannot be flattened physically due to the high risk of causing further damage. Previous methods for virtual restoration involve globally flattening a 3D reconstruction of the document to produce a static image. We show how this global approach can fail in cases of severe geometric distortion and instead propose an interactive viewer that allows a user to browse a document while dynamically flattening only the local region under inspection. Our application also records the provenance of the reconstruction by displaying it side by side with the original image data.
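
    A generic stand-in for local flattening is to project the 3D points under the viewport onto their best-fit plane, as sketched below; the paper's actual flattening method may well be more sophisticated.

```python
import numpy as np

def flatten_patch(points):
    """Project a local 3-D patch (N, 3) onto its PCA best-fit plane,
    returning (N, 2) flattened coordinates. A generic illustration only."""
    c = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - c, full_matrices=False)
    return (points - c) @ vt[:2].T   # first two principal directions
```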

    Learning on the Edge: Explicit Boundary Handling in CNNs

    Convolutional neural networks (CNNs) handle the case where filters extend beyond the image boundary using heuristics such as zero, repeat, or mean padding. These schemes are applied in an ad-hoc fashion and, being weakly related to the image content and oblivious to the target task, result in low output quality at the boundary. In this paper, we propose a simple and effective improvement that learns the boundary handling itself. At training time, the network is provided with a separate set of explicit boundary filters. At test time, we use these filters, which have learned to extrapolate features at the boundary in a way that is optimal for the specific task. Our extensive evaluation over a wide range of architectural changes (variations of layers, feature channels, or both) shows how the explicit filters result in improved boundary handling. Consequently, we demonstrate an improvement of 5% to 20% across the board of typical CNN applications (colorization, de-Bayering, optical flow, and disparity estimation).